 AAAI AI-Alert Ethics for Mar 23, 2021


Futurium

#artificialintelligence

A recent study by Dr Kristina Irion joins up three EU policy areas that intersect in the digital age: consumer protection, EU governance of AI and EU external trade. "They are becoming so intertwined," says Dr Irion. "My research is breaking up the silos and drawing insights from their interactions." The study concludes that the source code clause the EU supports in international trade agreements restricts the EU's right to regulate AI in the interests of consumers.


How will Singapore ensure responsible AI use?

#artificialintelligence

Since 2019, government-sponsored initiatives around AI have proliferated across Asia Pacific. These include cross-domain AI ethics councils, guidelines and frameworks for the responsible use of AI, and financial and technology support. The majority of these initiatives build on each country's respective data privacy and protection acts. This is a clear sign that governments see the need to expand existing regulations when it comes to leveraging AI as a key driver of digital economies. All initiatives to date are voluntary in nature, but there are already indications that existing data privacy and protection laws will be updated and expanded to cover AI.


'This is bigger than just Timnit': How Google tried to silence a critic and ignited a movement

#artificialintelligence

Timnit Gebru--a giant in the world of AI and then co-lead of Google's AI ethics team--was pushed out of her job in December. Gebru had been fighting with the company over a research paper that she'd coauthored, which explored the risks of the AI models that the search giant uses to power its core products--the models are involved in almost every English query on Google, for instance. The paper called out the potential biases (racial, gender, Western, and more) of these language models, as well as the outsize carbon emissions required to compute them. Google wanted the paper retracted, or any Google-affiliated authors' names taken off; Gebru said she would do so if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned. After the company abruptly announced Gebru's departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff--despite Gebru's credentials and history of groundbreaking research.